Security Operations
Collaborative Intelligence: Topic Modelling of Large Language Model use in Live Cybersecurity Operations
Lochner, Martin; Keplinger, Keegan
Objective: This work describes the topic modelling of Security Operations Centre (SOC) use of a large language model (LLM) during live security operations. The goal is to better understand how these specialists voluntarily use this tool.
Background: Human-automation teams have been extensively studied, but transformer-based language models have sparked a new wave of collaboration. SOC personnel at a major cybersecurity provider used an LLM to support live security operations. This study examines how these specialists incorporated the LLM into their work.
Method: Our data set is the result of 10 months of SOC operators accessing GPT-4 through an internally deployed HTTP-based chat application. We performed two topic modelling exercises: first using the established BERTopic model (Grootendorst, 2022), and second using a novel topic modelling workflow.
Results: Both the BERTopic analysis and the novel modelling approach revealed that SOC operators primarily used the LLM to facilitate their understanding of complex text strings. Variations on this use case accounted for ~40% of SOC LLM usage.
Conclusion: SOC operators are required to rapidly interpret complex commands and similar information. Their natural tendency to leverage LLMs to support this activity indicates that their workflow can be supported and augmented by designing collaborative LLM tools for use in the SOC.
Application: This work can aid in creating next-generation tools for Security Operations Centres. By understanding common use cases, we can develop workflows that support SOC task flow. One example is a right-click context menu for executing a command-line-analysis LLM call directly in the SOC environment.
- North America > Canada (0.05)
- Europe > Ireland (0.04)
- Oceania > Australia (0.04)
- North America > United States > Alaska > Kodiak Island Borough > Kodiak (0.04)
- Workflow (1.00)
- Research Report > Experimental Study (0.48)
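The right-click "explain this command line" workflow proposed in the abstract above could wrap a single chat request. The sketch below is illustrative only: the function name, prompt text, and request schema are assumptions modelled on common chat-completion APIs, not the study's internal application.

```python
import json

# Hypothetical system prompt; the prompt used in the study is not published.
SYSTEM_PROMPT = (
    "You are a SOC assistant. Explain what the following command line "
    "does, step by step, and flag anything suspicious."
)

def build_explain_request(cmd: str) -> str:
    """Build a chat-completion-style JSON body asking the LLM to
    interpret a command line selected by the operator."""
    return json.dumps({
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": cmd},
        ],
        "temperature": 0,  # deterministic answers aid triage consistency
    })
```

A context-menu handler would POST this body to the internal chat endpoint and render the returned explanation next to the alert under investigation.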
Generative AI in Live Operations: Evidence of Productivity Gains in Cybersecurity and Endpoint Management
Bono, James; Grana, Justin; Karakolios, Kleanthis; Ramakrishna, Pruthvi Hanumanthapura; Srivastava, Ankit
We measure the association between generative AI (GAI) tool adoption and four metrics spanning security operations, information protection, and endpoint management: 1) number of security alerts per incident, 2) probability of security incident reopenings, 3) time to classify a data loss prevention alert, and 4) time to resolve device policy conflicts. We find that GAI is associated with robust and statistically and practically significant improvements in the four metrics. Although unobserved confounders inhibit causal identification, these results are among the first to use observational data from live operations to investigate the relationship between GAI adoption and security operations, data loss prevention, and device policy management.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Generation (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.35)
What Is Extended Detection and Response (XDR)? - Big Data Analytics News
XDR, or Extended Detection and Response, is an emerging security technology that is rapidly gaining popularity in the cybersecurity industry. It is a comprehensive security solution that offers a unified approach to threat detection, investigation, and response across multiple endpoints, networks, and cloud environments. In today's digital age, cyber threats are becoming increasingly sophisticated and diverse, making it difficult for organizations to detect and respond to them in a timely and effective manner. Traditional security solutions, such as antivirus software, firewalls, and intrusion detection systems, are no longer sufficient to protect against the complex and evolving threat landscape. XDR collects and correlates data from various sources, including endpoints, network devices, and cloud platforms, and applies advanced analytics and machine learning algorithms to identify suspicious activity and potential threats.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.70)
- Information Technology > Data Science > Data Mining > Big Data (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (0.99)
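The "collect and correlate" step in the XDR summary above is, at its core, a join of events from different sources by shared entity and time proximity. A minimal sketch, assuming simple dict-based events with `host` and `ts` fields (names chosen for illustration, not from any specific XDR product):

```python
from datetime import datetime, timedelta

def correlate(endpoint_events, network_events, window=timedelta(minutes=5)):
    """Pair endpoint and network events from the same host whose
    timestamps fall within `window` of each other, the basic join an
    XDR pipeline performs before scoring the combined activity."""
    pairs = []
    for e in endpoint_events:
        for n in network_events:
            if e["host"] == n["host"] and abs(e["ts"] - n["ts"]) <= window:
                pairs.append((e, n))
    return pairs
```

A production pipeline would index by host and sort by timestamp rather than use this quadratic scan, and would feed the resulting pairs to an analytics or scoring stage.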
Director, Security Operations at Veritone - United States
We are driven by the belief that Artificial Intelligence is mankind's greatest invention. It is the key to building a safer, more vibrant, transparent, and empowered society. We are determined to be an active contributor to shaping our future for the better. We care about the ethical implications of AI and the prosperity and well-being of all individuals, as well as the growth and continued successes of our employees, customers, and partners. Veritone's mission today is more important than ever.
The Next Generation of Threat Detection Will Require Both Human and Machine Expertise
There is a debate in the world of cybersecurity about whether to use human or machine expertise. However, this is a false dichotomy: Truly effective threat detection and response need both kinds of expertise working in tandem. It will be years before machines completely replace the humans who perform typical detection and response tasks. What we predict for the meantime is a symbiotic relationship between humans and machines. The combination means that detection of and response to threats can be faster and more intelligent.
Use of advanced technologies touted as legacy of Tokyo Games
This summer's Tokyo Olympics and Paralympics took place without major incidents such as a terrorist attack, thanks to the unprecedented scale of security operations by police and the Games' organizing committee. An expert touted the use of cutting-edge technologies, including a facial recognition system, as well as public-private cooperation, as a legacy of the events. The Tokyo Games organizing committee formed a joint venture of 553 security service companies from around Japan, with up to 14,000 personnel mobilized per day to guard the athletes village and competition venues. About 59,900 police officers were gathered from police departments across the country, including those belonging to Tokyo's Metropolitan Police Department. A facial recognition system was used for the first time in Olympic and Paralympic history for personal identification of athletes and staff officials entering the athletes village, competition venues, and other places related to the Tokyo Games. More than 300 face recognition devices for the system, developed by Japanese electronics giant NEC Corp., were installed.
Ethical AI needs to thrive in SecOps: 3 key guidelines
Security operations centers (SOCs) increasingly rely on network data flows as they collect telemetry from devices and monitor user behaviors. To make these massive data flows manageable, SOCs turn to rules, machine learning, and artificial (or augmented) intelligence to triage, de-duplicate, and add context to alerts about potentially dangerous or malicious activity. Pushing the boundaries of what machine learning can deliver when nourished by massive data has already led to significant invasions of privacy, especially when the efforts are driven by business demands. More often than not, ethics has taken a back seat when applying machine learning and AI. Companies such as Clearview AI and Cambridge Analytica vastly overreached in their analysis of consumer data simply because they could, using it without explicit permission and offering nothing in return.
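The de-duplication step mentioned above can be sketched as grouping alerts by their identifying fields and keeping one representative with a count. The field names (`host`, `rule`) are illustrative assumptions, not any particular SIEM's schema:

```python
from collections import defaultdict

def deduplicate(alerts):
    """Collapse alerts sharing the same (host, rule) into a single
    representative alert carrying an occurrence count, a common first
    step before analyst triage."""
    buckets = defaultdict(list)
    for alert in alerts:
        buckets[(alert["host"], alert["rule"])].append(alert)
    deduped = []
    for group in buckets.values():
        rep = dict(group[0])       # keep the first alert's fields
        rep["count"] = len(group)  # record how many were collapsed
        deduped.append(rep)
    return deduped
```

Real pipelines typically add a time window to the grouping key so that recurring activity on different days is not merged into one alert.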
MSPs are Bolstering Security Programs with Machine Learning and Automation
Advanced threats, a shortage of security experts and the rise in work-from-home together form a catalyst for MSPs to enhance cybersecurity effectiveness for their customers. As MSPs seek ways to increase efficiency and do more with less, they're turning to advanced analytical capabilities like machine learning, security analytics and automation. All of these have moved past their initial hype cycle and are now adopted and delivering enhanced ROI and outcomes in IT and cybersecurity. "The future of your business is Big Data and Machine Learning tied to the business opportunities and customer challenges before you." Machine learning and automation are more than popular buzzwords in the cybersecurity industry.
Security Pros Trust Cyberthreat Detections Verified by Humans over AI
WhiteHat Security, an application security provider for enterprises and an independent subsidiary of NTT Ltd., reported that over half of organizations globally use artificial intelligence (AI) or machine learning in their security operations; however, 60% of them are more confident in cyberthreat detections verified by humans than in those made by AI alone. In its research, "AI and Human Element Security Sentiment Study", WhiteHat Security highlighted the need for security organizations to incorporate both AI- and human-centric offerings, especially in the application security space. Based on the responses of 102 professionals in the cybersecurity industry, 45% of respondents said their companies lack a sufficiently staffed cybersecurity team. Over 70% of respondents agreed that AI-based tools made their security teams more efficient by eliminating over 55% of everyday security operations tasks. According to 40% of respondents, incorporating AI tools into security operations decreased employees' stress levels.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.66)
What is AI for Security Operations?
There is a lot of excitement around AI for SecOps. From a market perspective, AI in cybersecurity is projected to grow at a CAGR of 23.3% between 2019 and 2026 to exceed $38B. On the physical security front, the AI-powered video analytics market, driven primarily by security and safety, is projected to grow at a CAGR of 22.3% between 2018 and 2025 to reach $4.5B. From a value perspective, securityintelligence.com has an insightful article titled "Artificial Intelligence (AI) and Security: A Match Made in the SOC", which says: "In summary, when security analysts partner with artificial intelligence, the benefits include streamlined threat detection, investigation and response processes, increased productivity, and improved job satisfaction: analysts spend more time doing what they enjoy most and the cost of security breaches decreases." It is a well-known fact that the talent war is real in security operations.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.45)